Linear Regression with Python

Imports

Import pandas, numpy, matplotlib, and seaborn. Then set %matplotlib inline. (You'll import sklearn as you need it.)

In [1]:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline

Get the Data

We'll work with the Ecommerce Customers csv file from the company. It has customer info, such as Email, Address, and their color Avatar. It also has these numerical value columns:

  • Avg. Session Length: Average session length of in-store style-advice sessions.
  • Time on App: Average time spent on the app, in minutes.
  • Time on Website: Average time spent on the website, in minutes.
  • Length of Membership: How many years the customer has been a member.

Read in the Ecommerce Customers csv file as a DataFrame called customers.

In [2]:
customers = pd.read_csv("Ecommerce Customers")

Check the head of customers, and check out its info() and describe() methods.

In [3]:
#Check the head of the dataframe
customers.head()
Out[3]:
Email Address Avatar Avg. Session Length Time on App Time on Website Length of Membership Yearly Amount Spent
0 mstephenson@fernandez.com 835 Frank Tunnel\nWrightmouth, MI 82180-9605 Violet 34.497268 12.655651 39.577668 4.082621 587.951054
1 hduke@hotmail.com 4547 Archer Common\nDiazchester, CA 06566-8576 DarkGreen 31.926272 11.109461 37.268959 2.664034 392.204933
2 pallen@yahoo.com 24645 Valerie Unions Suite 582\nCobbborough, D... Bisque 33.000915 11.330278 37.110597 4.104543 487.547505
3 riverarebecca@gmail.com 1414 David Throughway\nPort Jason, OH 22070-1220 SaddleBrown 34.305557 13.717514 36.721283 3.120179 581.852344
4 mstephens@davidson-herman.com 14023 Rodriguez Passage\nPort Jacobville, PR 3... MediumAquaMarine 33.330673 12.795189 37.536653 4.446308 599.406092
In [4]:
# Use describe() to get summary statistics for the numerical columns
customers.describe()
Out[4]:
Avg. Session Length Time on App Time on Website Length of Membership Yearly Amount Spent
count 500.000000 500.000000 500.000000 500.000000 500.000000
mean 33.053194 12.052488 37.060445 3.533462 499.314038
std 0.992563 0.994216 1.010489 0.999278 79.314782
min 29.532429 8.508152 33.913847 0.269901 256.670582
25% 32.341822 11.388153 36.349257 2.930450 445.038277
50% 33.082008 11.983231 37.069367 3.533975 498.887875
75% 33.711985 12.753850 37.716432 4.126502 549.313828
max 36.139662 15.126994 40.005182 6.922689 765.518462
In [5]:
customers.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 500 entries, 0 to 499
Data columns (total 8 columns):
Email                   500 non-null object
Address                 500 non-null object
Avatar                  500 non-null object
Avg. Session Length     500 non-null float64
Time on App             500 non-null float64
Time on Website         500 non-null float64
Length of Membership    500 non-null float64
Yearly Amount Spent     500 non-null float64
dtypes: float64(5), object(3)
memory usage: 31.3+ KB

Exploratory Data Analysis

Let's explore the data!

For the rest of the exercise we'll only be using the numerical data of the csv file.

Use seaborn to create a jointplot to compare the Time on Website and Yearly Amount Spent columns. Does the correlation make sense?

In [6]:
sns.set_palette("GnBu_d")
sns.set_style('whitegrid')
In [7]:
# Does more time on the website mean more money spent? The plot shows essentially no correlation.
sns.jointplot(x='Time on Website',y='Yearly Amount Spent',data=customers)
Out[7]:
<seaborn.axisgrid.JointGrid at 0x9a27320>

Do the same but with the Time on App column instead.

In [8]:
sns.jointplot(x='Time on App',y='Yearly Amount Spent',data=customers)
Out[8]:
<seaborn.axisgrid.JointGrid at 0x9976ef0>
We can see a clear positive correlation between Yearly Amount Spent and Time on App: the more time customers spend on the app, the more they spend.

Use jointplot to create a 2D hex bin plot comparing Time on App and Length of Membership.

In [9]:
sns.jointplot(x='Time on App',y='Length of Membership',kind='hex',data=customers)
Out[9]:
<seaborn.axisgrid.JointGrid at 0x9b801d0>

Let's explore these types of relationships across the entire data set. Use pairplot to recreate the plot below.

In [10]:
sns.pairplot(customers)
Out[10]:
<seaborn.axisgrid.PairGrid at 0x9cdca90>

Based on this plot, what looks to be the most correlated feature with Yearly Amount Spent?
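To answer this numerically rather than by eye, you can check each feature's correlation with the target directly. A quick sketch, assuming a pandas version that accepts numeric_only (older versions drop the text columns automatically and don't need the argument):

In [ ]:
# Correlation of every numerical column with the target
# (numeric_only=True excludes Email, Address, and Avatar on newer pandas)
customers.corr(numeric_only=True)['Yearly Amount Spent'].sort_values(ascending=False)

Length of Membership should come out on top.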

Create a linear model plot (using seaborn's lmplot) of Yearly Amount Spent vs. Length of Membership.

In [11]:
sns.lmplot(x='Length of Membership',y='Yearly Amount Spent',data=customers)
Out[11]:
<seaborn.axisgrid.FacetGrid at 0xb7b11d0>
The plot above shows that the longer a customer has been a member, the more money they spend per year.

Training and Testing Data

Now that we've explored the data a bit, let's go ahead and split the data into training and testing sets. Set a variable X equal to the numerical features of the customers and a variable y equal to the "Yearly Amount Spent" column.

In [12]:
# y is the variable we are going to predict
y = customers['Yearly Amount Spent']
In [13]:
# X holds our numerical feature columns
X = customers[['Avg. Session Length', 'Time on App','Time on Website', 'Length of Membership']]

Use model_selection.train_test_split from sklearn to split the data into training and testing sets. Set test_size=0.3 and random_state=101

In [14]:
from sklearn.model_selection import train_test_split
In [15]:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=101)
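As a quick sanity check (not part of the original exercise), the 70/30 split of 500 rows should leave 350 training and 150 test samples:

In [ ]:
# Confirm the shapes of the split
print(X_train.shape, X_test.shape)   # expected: (350, 4) (150, 4)
print(y_train.shape, y_test.shape)   # expected: (350,) (150,)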

Training the Model

Now it's time to train our model on our training data!

Import LinearRegression from sklearn.linear_model

In [16]:
from sklearn.linear_model import LinearRegression

Create an instance of a LinearRegression() model named lm.

In [17]:
lm = LinearRegression()

Train/fit lm on the training data.

In [18]:
lm.fit(X_train,y_train)
Out[18]:
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)

Print out the coefficients of the model

In [19]:
# The coefficients
print('Coefficients: \n', lm.coef_)
Coefficients: 
 [25.98154972 38.59015875  0.19040528 61.27909654]
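The fitted model also exposes an intercept term; printing it is a cheap sanity check (its exact value isn't shown in the original notebook, so it is omitted here):

In [ ]:
# Baseline prediction when all features are zero
print('Intercept: \n', lm.intercept_)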

Predicting Test Data

Now that we have fit our model, let's evaluate its performance by predicting off the test values!

Use lm.predict() to predict on the X_test set of the data.

In [20]:
predictions = lm.predict(X_test)
In [21]:
predictions
Out[21]:
array([456.44186104, 402.72005312, 409.2531539 , 591.4310343 ,
       590.01437275, 548.82396607, 577.59737969, 715.44428115,
       473.7893446 , 545.9211364 , 337.8580314 , 500.38506697,
       552.93478041, 409.6038964 , 765.52590754, 545.83973731,
       693.25969124, 507.32416226, 573.10533175, 573.2076631 ,
       397.44989709, 555.0985107 , 458.19868141, 482.66899911,
       559.2655959 , 413.00946082, 532.25727408, 377.65464817,
       535.0209653 , 447.80070905, 595.54339577, 667.14347072,
       511.96042791, 573.30433971, 505.02260887, 565.30254655,
       460.38785393, 449.74727868, 422.87193429, 456.55615271,
       598.10493696, 449.64517443, 615.34948995, 511.88078685,
       504.37568058, 515.95249276, 568.64597718, 551.61444684,
       356.5552241 , 464.9759817 , 481.66007708, 534.2220025 ,
       256.28674001, 505.30810714, 520.01844434, 315.0298707 ,
       501.98080155, 387.03842642, 472.97419543, 432.8704675 ,
       539.79082198, 590.03070739, 752.86997652, 558.27858232,
       523.71988382, 431.77690078, 425.38411902, 518.75571466,
       641.9667215 , 481.84855126, 549.69830187, 380.93738919,
       555.18178277, 403.43054276, 472.52458887, 501.82927633,
       473.5561656 , 456.76720365, 554.74980563, 702.96835044,
       534.68884588, 619.18843136, 500.11974127, 559.43899225,
       574.8730604 , 505.09183544, 529.9537559 , 479.20749452,
       424.78407899, 452.20986599, 525.74178343, 556.60674724,
       425.7142882 , 588.8473985 , 490.77053065, 562.56866231,
       495.75782933, 445.17937217, 456.64011682, 537.98437395,
       367.06451757, 421.12767301, 551.59651363, 528.26019754,
       493.47639211, 495.28105313, 519.81827269, 461.15666582,
       528.8711677 , 442.89818166, 543.20201646, 350.07871481,
       401.49148567, 606.87291134, 577.04816561, 524.50431281,
       554.11225704, 507.93347015, 505.35674292, 371.65146821,
       342.37232987, 634.43998975, 523.46931378, 532.7831345 ,
       574.59948331, 435.57455636, 599.92586678, 487.24017405,
       457.66383406, 425.25959495, 331.81731213, 443.70458331,
       563.47279005, 466.14764208, 463.51837671, 381.29445432,
       411.88795623, 473.48087683, 573.31745784, 417.55430913,
       543.50149858, 547.81091537, 547.62977348, 450.99057409,
       561.50896321, 478.30076589, 484.41029555, 457.59099941,
       411.52657592, 375.47900638])

Create a scatterplot of the real test values versus the predicted values.

In [22]:
plt.scatter(y_test,predictions)
plt.xlabel('Y Test')
plt.ylabel('Predicted Y')
Out[22]:
Text(0, 0.5, 'Predicted Y')
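For an easier visual read on the fit (a small variation, not part of the original exercise), you can overlay a 45-degree reference line; points on the line are predicted exactly right:

In [ ]:
plt.scatter(y_test, predictions)
# Diagonal y = x reference line spanning the data range
lims = [min(y_test.min(), predictions.min()), max(y_test.max(), predictions.max())]
plt.plot(lims, lims, 'r--')
plt.xlabel('Y Test')
plt.ylabel('Predicted Y')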

Evaluating the Model

Let's evaluate our model performance by calculating the residual sum of squares and the explained variance score (R^2).

Calculate the Mean Absolute Error, Mean Squared Error, and the Root Mean Squared Error. Refer to the lecture or to Wikipedia for the formulas.

In [23]:
# Compute the error metrics with sklearn (a by-hand version follows below)
from sklearn import metrics

print('MAE:', metrics.mean_absolute_error(y_test, predictions))
print('MSE:', metrics.mean_squared_error(y_test, predictions))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, predictions)))
MAE: 7.228148653430815
MSE: 79.81305165097429
RMSE: 8.933815066978624
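Since the exercise suggests working from the formulas, here is a by-hand NumPy version of the same three metrics; it should reproduce the sklearn numbers above exactly:

In [ ]:
# The same metrics computed directly from their definitions
errors = y_test - predictions
print('MAE:', np.mean(np.abs(errors)))        # mean of absolute errors
print('MSE:', np.mean(errors**2))             # mean of squared errors
print('RMSE:', np.sqrt(np.mean(errors**2)))   # square root of the MSE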
In [26]:
# Calculate the R^2
metrics.explained_variance_score(y_test,predictions)
Out[26]:
0.9890771231889607

The R^2 value tells us how much of the variance in the target the model explains; here the model explains about 99% of the variance, which is very good.
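Strictly speaking, explained_variance_score only equals R^2 when the residuals have zero mean; sklearn's r2_score computes R^2 proper and should come out essentially identical here:

In [ ]:
# R^2 proper; matches the explained variance score for an unbiased model
metrics.r2_score(y_test, predictions)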

Residuals

You should have gotten a very good model with a good fit. Let's quickly explore the residuals to make sure everything was okay with our data.

Plot a histogram of the residuals and make sure it looks normally distributed. Use either seaborn distplot, or just plt.hist().

In [24]:
sns.distplot((y_test-predictions),bins=50);
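Note that distplot has since been deprecated; on seaborn 0.11 or later the equivalent plot would be (a sketch, assuming a newer environment than the one used above):

In [ ]:
# Modern replacement for distplot
sns.histplot(y_test - predictions, bins=50, kde=True)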

The residuals look roughly normally distributed (bell-shaped and centered near zero), which suggests the linear model is a good fit for this data.

Conclusion

We still want to answer the original question: should we focus our efforts on mobile app or website development? Or maybe that doesn't even really matter, and Length of Membership is what is really important. Let's see if we can interpret the coefficients to get an idea.

Recreate the dataframe below.

In [25]:
coefficients = pd.DataFrame(lm.coef_, X.columns)
coefficients.columns = ['Coefficient']
coefficients
Out[25]:
Coefficient
Avg. Session Length 25.981550
Time on App 38.590159
Time on Website 0.190405
Length of Membership 61.279097

How can we interpret these coefficients?

Interpreting the coefficients:

  • Holding all other features fixed, a 1 unit increase in Avg. Session Length is associated with an increase of 25.98 total dollars spent.
  • Holding all other features fixed, a 1 unit increase in Time on App is associated with an increase of 38.59 total dollars spent.
  • Holding all other features fixed, a 1 unit increase in Time on Website is associated with an increase of 0.19 total dollars spent.
  • Holding all other features fixed, a 1 unit increase in Length of Membership is associated with an increase of 61.28 total dollars spent. (See the sketch below for an empirical check.)
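You can verify this interpretation with a minimal sketch (the "average customer" baseline here is illustrative, not part of the original notebook): because the model is linear, bumping one feature by 1 unit while holding the others fixed moves the prediction by exactly that feature's coefficient.

In [ ]:
# Bump Time on App by 1 unit for an average customer and compare predictions
baseline = X.mean().to_frame().T        # a single "average" row of features
bumped = baseline.copy()
bumped['Time on App'] += 1
print(lm.predict(bumped) - lm.predict(baseline))   # ~[38.59]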

Do you think the company should focus more on their mobile app or on their website?

This is tricky; there are two ways to think about it: develop the website to catch up to the performance of the mobile app, or invest more in the app since that is what is already working. The right choice depends on other factors at the company, and you would probably want to explore the relationship between Length of Membership and the app or the website before coming to a conclusion!
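As a starting point for that follow-up (one possible sketch, not part of the original exercise), you could check whether membership length itself tracks app or website usage:

In [ ]:
# Does Length of Membership relate to time spent on either platform?
sns.jointplot(x='Length of Membership', y='Time on App', data=customers)
sns.jointplot(x='Length of Membership', y='Time on Website', data=customers)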

Congrats on your contract work! The company loved the insights! Let's move on.

Happy Learning....